perm filename DENNET[W85,JMC] blob sn#787261 filedate 1985-02-25 generic text, type C, neo UTF8
dennet[w85,jmc]		Reaction to Dennett's 1982 Cognitive Wheels: The
			Frame Problem in AI

1. It is always pleasant to find philosophers in a helpful frame of mind.
Perhaps Dennett's point about the helpfulness of AI to philosophy can
be sharpened a little.  How about this:

Proper understanding of mind requires thinking about actual mechanisms
to perform the mental operations.  It isn't necessary or even desirable
that writing programs be an important component of a philosopher's
or other theorist's activity.  It is necessary to be clear about
whether one is proposing an actual mechanism or merely describing
a few characteristics of what one hopes is a mechanism.  Dennett's
example of the magician sawing the woman in half is the best I've
seen.  The experience of writing a few programs may be needed
to acquire the necessary culture.

Well, that ran on.

2. Dennett's ``walking encyclopedia'' metaphor is defective in an
important way.  Besides the knowledge there has to be a mechanism.
An encyclopedia with just the knowledge won't do anything --- not
even walk.  There is a dilemma here.

On the one hand, the active mechanism can be extremely simple, a Lisp
or Prolog interpreter, a bare computer, or even a universal Turing
machine.  In that case, however, there has to be a program, and it
is problematical how to regard the program itself as knowledge, i.e.
as consisting of assertions.

It seems better to regard the active mechanism as including a
``reasoning program'' or ``knowledge interpreter'', i.e. a mechanism
that reasons from the knowledge base to conclusions about what it
should do and then does it.  This is essentially the proposal of my
1958 ``Programs with Common Sense''.  It has the philosophical
advantage that the goals and policy are all expressed by sentences
and what the robot should do is a consequence of these sentences,
and it does it.  However, such a reasoning program doesn't exist
at present; building one faces poorly understood difficulties.
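The gap between a bare knowledge base and a program that acts on it can be
made concrete with a toy sketch.  The fact base, the rules, and the
``should'' convention below are illustrative inventions, not McCarthy's
1958 formalism:

```python
# Toy "reasoning program": forward-chain from a knowledge base of
# sentences to conclusions, treat conclusions of the form
# ("should", action) as directives, and then act on them.
# All facts and rules here are made up for illustration.

facts = {("hungry", "robot"), ("at", "snack", "kitchen")}

# Each rule is (premises, conclusion).
rules = [
    ([("hungry", "robot"), ("at", "snack", "kitchen")],
     ("should", ("go", "kitchen"))),
]

def derive(facts, rules):
    """Add every conclusion whose premises are all already derived,
    repeating until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

conclusions = derive(facts, rules)
actions = [c[1] for c in conclusions if c[0] == "should"]
print(actions)  # what the robot does is a consequence of the sentences
```

The point of the sketch is only that the interpreter is separate from, and
driven by, the sentences; the behaviour changes when the facts do, with no
change to the mechanism.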

Most existing programs resemble STRIPS (Fikes and Nilsson 1971) in that
they use logical formulas, or data structures that can be interpreted
as logical formulas making assertions about the world, to express
facts, both general facts and facts about the particular situation.
However, the mechanisms that generate new facts cannot be described
as just drawing logical conclusions from the old.  Regarded as such,
they are often wrong, i.e. they will draw false conclusions from
true premisses and fail to draw conclusions that do follow.  They
work, when they do, because the collection of facts they are
given is debugged.  The resulting systems are unstable with regard
to the addition of truths, i.e. a system that draws only true
conclusions may begin drawing false conclusions when truths
are added.  The fact that systems are built this way is not just
a mistake.  Performance in a limited domain is achieved at the
price of specialization.  This specialization of function is
excessive even from the point of view of the system builders,
but the present expert system technology doesn't permit doing
a lot better.  The phenomenon is called ``brittleness'' in AI
folklore.
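The instability under addition of truths can be shown with a toy rule that
uses negation as failure.  The bird/penguin example is a standard AI
illustration, not taken from Dennett's paper:

```python
# Brittleness sketch: a rule using negation as failure is sound only
# relative to a debugged fact base.  Adding a truth can make the
# system start drawing a false conclusion.

def flies(x, facts):
    # "x flies if x is a bird and x is not known to be a penguin"
    return ("bird", x) in facts and ("penguin", x) not in facts

facts = {("bird", "tweety")}     # the debugged fact base
print(flies("tweety", facts))    # True, and true in the world

facts.add(("bird", "opus"))      # a truth: Opus is a bird
print(flies("opus", facts))      # True -- but Opus is in fact a
                                 # penguin, so the system is now wrong
```

Nothing false was added; the rule's correctness depended on the fact base
happening to mention every abnormal bird it would ever see.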

It is possible to make the problem of the mechanism of thought
easier to understand by building only a ``Missouri Program''.  It
only checks reasoning that the robot should do something, i.e.
it contains the epistemological mechanism and is supposed to
be driven by a heuristic mechanism.  Such a program in Dennett's
midnight snack example could check the argument that a snack
can be obtained by the steps Dennett mentions involving the
refrigerator.

Remark: The most repugnant feature of Dennett's paper is its
acceptance of including mayonnaise in turkey sandwiches.

Even the Missouri program isn't easy.
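A toy version of such a checker is easy to state, even if a serious one
isn't.  The tuple encoding of implications and the snack argument below
are illustrative only:

```python
# "Missouri program" sketch: it does not invent the argument, it only
# checks one it is shown.  Each step must be a given, or follow from
# already-proved steps by modus ponens.  An implication "p implies q"
# is encoded as the tuple ("implies", p, q).

def check_argument(givens, steps):
    proved = set(givens)
    for step in steps:
        by_mp = any(("implies", p, step) in proved for p in proved)
        if step not in proved and not by_mp:
            return False          # the checker says "show me"
        proved.add(step)
    return True

# Dennett's midnight-snack argument, crudely encoded:
givens = {
    "hungry",
    ("implies", "hungry", "want-snack"),
    ("implies", "want-snack", "open-refrigerator"),
}
print(check_argument(givens, ["want-snack", "open-refrigerator"]))  # True
print(check_argument(givens, ["open-refrigerator"]))                # False
```

The heuristic mechanism that finds the steps is the hard, missing part;
the checker above embodies only the epistemological side.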

To recapitulate this section, the walking encyclopedia needs to
be supplemented by some discussion, however short, of the mechanism
that uses the facts.  Incidentally, a similar omission marks
Douglas Hofstadter's version of the dialog about modus ponens
between Achilles and the tortoise.  The tortoise forces Achilles
into an infinite regress of justifications for modus ponens.  However,
Achilles should have said at the beginning: ``The correctness of
modus ponens can be asserted at any metalevel, but besides the
justifications for using it, I must tell you that the cause of my
using it is the way I'm built.''  A computer program can in principle
know more about how it is built than a person, and it can argue
that it is correctly built, but it is relevant that it is built
in a certain way and not merely that it ought to be.

3. The definition of the frame problem is unclear.  McDermott
mentions a narrow frame problem made to go away by suitable
non-monotonic reasoning and a more general problem.  I propose
using the term ``frame problem'' for the narrow one and inventing
another more descriptive name for the generalization.  I agree
with McDermott that the frame problem proper is essentially solved
by formalizing non-monotonic reasoning, and I present some
details in (McCarthy 1984).

4. The history on page 19 is a smoothed version.  The earliest
planning system using logic was Fischer Black's, and it was based
on the ideas of my 1958 paper and preceded the development of resolution.

5. We called ``frame axioms'' those axioms that said what didn't change.
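The contrast between explicit frame axioms and the non-monotonic
alternative can be sketched as a toy state update.  The representation
(fluents as dictionary keys) is illustrative, not any published formalism:

```python
# With explicit frame axioms, every action needs statements about what
# it does NOT change.  With non-monotonic default persistence, a single
# rule does the work: copy the old state, apply the action's stated
# effects, and let everything unmentioned carry over.

def result(action, state, effects):
    """Default persistence: unmentioned fluents persist automatically,
    replacing all the explicit frame axioms."""
    new = dict(state)                    # everything persists by default
    new.update(effects.get(action, {}))  # except the stated effects
    return new

state = {"on(a,b)": True, "color(a)": "red"}
effects = {"paint(a,blue)": {"color(a)": "blue"}}

s2 = result("paint(a,blue)", state, effects)
print(s2["on(a,b)"])   # True: painting didn't move the block,
                       # though no axiom said so explicitly
print(s2["color(a)"])  # blue
```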

6. p25 - Ceteris paribus reasoning isn't ``distinctively human'' any more
than logic is.

7. p26 - There is an important distinction between having the designer
``select an improved'' design, and having the designer or someone
else tell the robot relevant facts.  A mere informant need not
understand much about how the informee works.

8. I agree with almost all the rest of the paper.  However, I doubt
that the availability of massive parallel machines will by itself
generate new ideas.  A doubt more relevant to DARPA than to Dennett.

9. It seems that Dennett uses the term ``frame problem'' for the
more general problem of expressing the facts about the consequences
of actions and events.  McDermott calls it the ``prediction problem'',
and I think that's a good name.  Maybe it should be called the
problem of formalizing qualitative prediction information.